Welcome to FairyR1-32B created by PKU-DS-LAB!

| Benchmark | DeepSeek-R1-671B | DeepSeek-R1-Distill-Qwen-32B | FairyR1-32B (PKU) |
| --- | --- | --- | --- |
| AIME 2024 (Math) | 79.8 | 72.6 | 80.4 |
| AIME 2025 (Math) | 70.0 | 52.9 | 75.6 |
| LiveCodeBench (Code) | 65.9 | 57.2 | 67.7 |
| GPQA-Diamond (Sci-QA) | 71.5 | 62.1 | 60.0 |

Introduction

FairyR1-32B is a highly efficient large language model (LLM) that matches or exceeds much larger models on select tasks despite using only about 5% of their parameters. Built on the DeepSeek-R1-Distill-Qwen-32B base, FairyR1-32B leverages a novel "distill-and-merge" pipeline, combining task-focused fine-tuning with model-merging techniques, to deliver competitive performance at a drastically reduced size and inference cost. This project was funded by NSFC, Grant 624B2005.

Model Details

The FairyR1 model represents a further exploration of our earlier work TinyR1, retaining the core “Branch-Merge Distillation” approach while introducing refinements in data processing and model architecture.

In this effort, we overhauled the distillation data pipeline: raw examples from datasets such as AIMO/NuminaMath-1.5 for mathematics and OpenThoughts-114k for code were first passed through multiple 'teacher' models to generate candidate answers. These candidates were then carefully selected, restructured, and refined, with particular attention to the chain of thought (CoT). We then applied multi-stage filtering, including automated correctness checks for math problems and length-based selection (2K–8K tokens for math samples, 4K–8K tokens for code samples). This yielded two focused training sets of roughly 6.6K math examples and 3.8K code examples.
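
The filtering stage described above can be sketched as follows. This is an illustrative reconstruction, not the actual pipeline code: the names `Sample`, `keep`, and `count_tokens` are hypothetical, and a real implementation would count tokens with the model's tokenizer rather than whitespace splitting.

```python
# Hedged sketch of the multi-stage filter: keep only candidates that pass the
# automated correctness check AND fall inside the per-domain length window
# (2K-8K tokens for math, 4K-8K tokens for code).
from dataclasses import dataclass

LENGTH_WINDOWS = {"math": (2_000, 8_000), "code": (4_000, 8_000)}

@dataclass
class Sample:
    domain: str          # "math" or "code"
    candidate: str       # teacher-generated answer with its chain of thought
    is_correct: bool     # result of the automated correctness check

def count_tokens(text: str) -> int:
    # Placeholder: a real pipeline would use the model tokenizer here.
    return len(text.split())

def keep(sample: Sample) -> bool:
    lo, hi = LENGTH_WINDOWS[sample.domain]
    return sample.is_correct and lo <= count_tokens(sample.candidate) <= hi

samples = [
    Sample("math", "step " * 3_000, True),   # in window, correct -> kept
    Sample("math", "step " * 500, True),     # too short -> dropped
    Sample("code", "line " * 5_000, False),  # failed correctness -> dropped
]
kept = [s for s in samples if keep(s)]       # only the first sample survives
```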

On the modeling side, rather than training three separate specialists as before, we limited our scope to just two domain experts (math and code), each trained independently under identical hyperparameters (e.g., learning rate and batch size) for about five epochs. We then fused these experts into a single 32B-parameter model using the AcreeFusion tool. By streamlining both the data distillation workflow and the specialist-model merging process, FairyR1 achieves task-competitive results with only a fraction of the parameters and computational cost of much larger models.
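
The card does not publish the internals of the AcreeFusion merge, so the snippet below only illustrates the general idea of fusing two same-architecture experts: interpolating their parameters key by key. `merge_state_dicts` and the toy state dicts are hypothetical; real merging tools operate on full tensor checkpoints and use more sophisticated strategies than plain averaging.

```python
# Minimal sketch of fusing two domain experts into one model by parameter
# interpolation. This is a generic stand-in, NOT the AcreeFusion algorithm.
def merge_state_dicts(math_sd, code_sd, alpha=0.5):
    """Interpolate two state dicts that share identical keys and shapes."""
    assert math_sd.keys() == code_sd.keys(), "experts must share architecture"
    return {
        key: [alpha * m + (1 - alpha) * c
              for m, c in zip(math_sd[key], code_sd[key])]
        for key in math_sd
    }

# Toy 2-parameter "checkpoints" for the math and code experts.
math_expert = {"layer.weight": [1.0, 2.0]}
code_expert = {"layer.weight": [3.0, 4.0]}

merged = merge_state_dicts(math_expert, code_expert)
# -> {"layer.weight": [2.0, 3.0]}
```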

Result Analysis and Key Contributions:

From the test results, FairyR1 scored slightly higher than DeepSeek-R1-671B on the AIME 2024/2025 and LiveCodeBench benchmarks, while trailing both baselines on GPQA-Diamond.

These results indicate that, by building on the DeepSeek‑R1‑Distill‑Qwen‑32B base and applying targeted techniques, FairyR1 achieves comparable or slightly superior performance in mathematical and programming domains using only about 5% of the parameter count of much larger models, although performance gaps may remain in other fields such as scientific question answering.

This work demonstrates the feasibility of significantly reducing model size and potential inference cost through optimized data processing and model fusion techniques while maintaining strong task-specific performance.

Model Description

  • Developed by: PKU-DS-LAB
  • Model type: Reasoning Model
  • Language(s) (NLP): English, Chinese
  • License: apache-2.0
  • Finetuned from model: DeepSeek-R1-Distill-Qwen-32B

Training Data

Hardware Utilization

  • Hardware Type: 32 × NVIDIA-H100
  • Hours used (Math): 2.5 h
  • Hours used (Coding): 1.5 h
  • Model Merging: about 40 min on CPU; no GPU needed

Evaluation Set

  • AIME 2024/2025 (math): We evaluate 32 times and report the average accuracy. AIME 2024 contains 30 problems. AIME 2025 consists of Part I and Part II, with a total of 30 questions.
  • LiveCodeBench (code): We evaluate 8 times and report the average accuracy. The dataset version is "release_v5" (date range: 2024-08-01 to 2025-02-01), consisting of 279 problems.
  • GPQA-Diamond (Sci-QA): We evaluate 8 times and report the average accuracy. The dataset consists of 198 problems.
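
The evaluation protocol above (run each benchmark several times and report the mean) can be expressed as a short sketch; `mean_accuracy` and the per-run scores are illustrative, not actual results.

```python
# Report the average accuracy over repeated evaluation runs
# (32 runs for AIME, 8 runs for LiveCodeBench and GPQA-Diamond).
def mean_accuracy(run_results):
    """run_results: per-run accuracies in [0, 1]."""
    return sum(run_results) / len(run_results)

hypothetical_runs = [0.80, 0.82, 0.78, 0.80]  # illustrative per-run scores
print(round(mean_accuracy(hypothetical_runs), 2))  # prints 0.8
```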

FairyR1 series Team Members:

Led By:

Tong Yang

Core Contributors:

Wang Li; Junting Zhou; Wenrui Liu; Yilun Yao; Rongle Wang

Model Card Contact

For more details, please contact: yangtong@pku.edu.cn
